
All RISK requires consideration in the following five areas of a business: corporate strategy, supply chain organization, process management, performance metrics, and information & technology

Read the attached case study and answer the questions below. The answer must follow the professor's instructions: APA format, an appendix, a cover page, complete content, more than 6 references, and no plagiarism. APA format reference URL: https://owl.purdue.edu/owl/research_and_citation/apa_style/apa_formatting_and_style_guide/general_format.html

Course Final Grade Paper

Enclosed are the requirements for the paper that is the major part of your grade. Please note that formatting of the paper is a critical aspect of the grade. 

Grading of paper will be as follows: 

1. Content = 60% 

2. APA Research Paper Formatting = 20% 

3. A minimum of five references = 10% 

4. Page count is a minimum of 6 pages NOT COUNTING COVER PAGE, REFERENCES, AND APPENDIX = 10%

5. Grammar and spelling will be part of the content grade.

6. There are four specific questions to respond to for your particular Case Study.

a. To respond, you must read and understand the information provided in the Case Study. (This may require reading the Case Study and taking notes several times.)

b. Your responses should utilize subjects covered in lecture.

Document

Final Paper Four Questions + Bonus Question

Each Final Paper will include a review of the database provided to each student, who will then answer the following four questions (note: each question requires a definite understanding of your database):

1. All RISK requires consideration in the following five areas of a business: corporate strategy, supply chain organization, process management, performance metrics and information & technology. 

a. What could be the main "risk" for your database?

b. Of the four possible solutions for the "risk", which is most applicable to the "risk" you selected?

2. Use the "risk" selected in question 1 and build a risk assessment matrix, explaining each selection made for the column headings (an illustrative sketch of one possible layout appears after question 4 below).

3. If you could locate the database firm (your selection) and consider it a development company, anywhere in the world, which place would you choose and why? You must consider the Supply Chain impacts.

4. Can the Supply Chain for your company (database) become too lean? Explain and provide supporting references.
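
For question 2, the sketch below is a minimal illustration (not a required format) of one common risk-assessment-matrix layout. It assumes a 1-5 likelihood scale, a 1-5 impact scale, and a score computed as likelihood × impact; the column headings, scoring bands, and example risk entries are hypothetical placeholders for whatever your database and lecture materials actually support.

# Illustrative sketch only: one common risk assessment matrix layout,
# assuming 1-5 likelihood and impact scales and score = likelihood * impact.
# The example risks, scales, and response column are hypothetical placeholders.

RISKS = [
    # (risk description,           likelihood, impact, planned response)
    ("Single-supplier dependency", 4,          5,      "Qualify a second supplier"),
    ("Port congestion / delays",   3,          4,      "Hold additional safety stock"),
    ("Customer database breach",   2,          5,      "Encrypt and back up data"),
]

def score(likelihood: int, impact: int) -> int:
    """Simple multiplicative risk score (1-25 on 1-5 scales)."""
    return likelihood * impact

def rating(s: int) -> str:
    """Bucket the score into the usual low/medium/high bands."""
    if s >= 15:
        return "High"
    if s >= 8:
        return "Medium"
    return "Low"

if __name__ == "__main__":
    print(f"{'Risk':<30}{'Likelihood':>11}{'Impact':>8}{'Score':>7}{'Rating':>8}  Response")
    for risk, likelihood, impact, response in RISKS:
        s = score(likelihood, impact)
        print(f"{risk:<30}{likelihood:>11}{impact:>8}{s:>7}{rating(s):>8}  {response}")

In the paper itself the same information can simply be presented as a table; the point of the sketch is that each column heading (the likelihood scale, the impact scale, the scoring rule, and the planned response) is a selection you must explain, which is exactly what question 2 asks for.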

BONUS COMPOUND QUESTION:

 In March 2021, the Suez Canal was blocked for six days after the grounding of Ever Given, a 20,000 TEU container ship. The 400-metre-long (1,300 ft) vessel was buffeted by strong winds on the morning of 23 March and ended up wedged across the waterway with its bow and stern stuck in the canal banks.

a. Do the colors of the containers have a significance?

b. How does a company know which container is theirs?

c. How can you identify a refrigerated container from a plain container?

Provide support for your responses.


Document

APA Research Paper Format Details

Use this website and follow its instructions for formatting an APA research paper. Please note that it provides font size, font selection, line spacing, and all details of paper headings and necessary inclusions:

https://owl.purdue.edu/owl/research_and_citation/apa_style/apa_formatting_and_style_guide/general_format.html


Document

Professor Suggestions for the Highest Possible Grade

1. Utilize elements discussed regarding "RISK", types, and approaches for analysis.

2. Provide "your" best opinion and support it with at least five or more references.

3. The paper must have complete thoughts for each question.

4. It is recommended that "charting (those we have discussed: Gantt, Pareto, histogram, flow chart, etc.)" be used in your responses as a graphic demonstration of your answer (a brief illustrative charting sketch follows this list).

5. The "Conclusion" for your paper must reflect your understanding of "risk" as it applies to "Supply Chain Management" for the specific Case Study you have been assigned.



Branching Paths: A Novel Teacher Evaluation Model for Faculty Development

James P. Bavis and Ahn G. Nu

Department of English, Purdue University

ENGL 101: First Year Writing

Dr. Richard Teeth

January 30, 2020

Commented [AF1]: At the top of the page you’ll see the header, which does not include a running head for student papers (a change from APA 6). Page numbers begin on the first page and follow on every subsequent page without interruption. No other information (e.g., authors' last names) is required. Note: your instructor may ask for a running head or your last name before the page number. You can look at the APA professional sample paper for guidelines on these.

Commented [AF2]: The paper's title should be centered, bold, and written in title case. It should be three or four lines below the top margin of the page. In this sample paper, we've put four blank lines above the title.

Commented [AF3]: Authors' names are written below the title, with one double-spaced blank line between them. Names should be written as follows: First name, middle initial(s), last name.

Commented [AF4]: Authors' affiliations follow immediately after their names. For student papers, these should usually be the department containing the course for which the paper is being written.

Commented [AWC5]: Note that student papers in APA do not require author notes, abstracts, or keywords, which would normally fall at the bottom of the title page and on the next page afterwards. Your instructor may ask for them anyway — see the APA professional sample paper on our site for guidelines for these.

Commented [AF6]: Follow authors' affiliations with the number and name of the course, the instructor's name and title, and the assignment's due date.


Branching Paths: A Novel Teacher Evaluation Model for Faculty Development

According to Theall (2017), “Faculty evaluation and development cannot be considered separately… evaluation without development is punitive, and development without evaluation is guesswork” (p. 91). As the practices that constitute modern programmatic faculty development have evolved from their humble beginnings to become a commonplace feature of university life (Lewis, 1996), a variety of tactics to evaluate the proficiency of teaching faculty for development purposes have likewise become commonplace. These include measures as diverse as peer observations, the development of teaching portfolios, and student evaluations.

One such measure, the student evaluation of teacher (SET), has been virtually ubiquitous since at least the 1990s (Wilson, 1998). Though records of SET-like instruments can be traced to work at Purdue University in the 1920s (Remmers & Brandenburg, 1927), most modern histories of faculty development suggest that their rise to widespread popularity went hand-in-hand with the birth of modern faculty development programs in the 1970s, when universities began to adopt them in response to student protest movements criticizing mainstream university curricula and approaches to instruction (Gaff & Simpson, 1994; Lewis, 1996; McKeachie, 1996). By the mid-2000s, researchers had begun to characterize SETs in terms like “…the predominant measure of university teacher performance […] worldwide” (Pounder, 2007, p. 178). Today, SETs play an important role in teacher assessment and faculty development at most universities (Davis, 2009). Recent SET research practically takes the presence of some form of this assessment on most campuses as a given. Spooren et al. (2017), for instance, merely note that SETs can be found at “almost every institution of higher education throughout the world” (p. 130). Similarly, Darwin (2012) refers to teacher evaluation as an established orthodoxy, labeling it a “venerated,” “axiomatic” institutional practice (p. 733).

Commented [AF7]: The paper's title is bolded and centered above the first body paragraph. There should be no "Introduction" header.

Commented [AWC8]: Here, we've borrowed a quote from an external source, so we need to provide the location of the quote in the document (in this case, the page number) in the parenthetical.

Commented [AWC9]: By contrast, in this sentence, we've merely paraphrased an idea from the external source. Thus, no location or page number is required. You can cite a page range if it will help your reader find the section of source material you are referring to, but you don’t need to, and sometimes it isn’t practical (too large of a page range, for instance).

Commented [AWC10]: Spell out abbreviations the first time you use them, except in cases where the abbreviations are very well-known (e.g., "CIA").

Commented [AWC11]: For sources with two authors, use an ampersand (&) between the authors' names rather than the word "and."

Commented [AWC12]: When listing multiple citations in the same parenthetical, list them alphabetically and separate them with semicolons.


Moreover, SETs do not only help universities direct their faculty development efforts. They have also come to occupy a place of considerable institutional importance for their role in personnel considerations, informing important decisions like hiring, firing, tenure, and promotion. Seldin (1993, as cited in Pounder, 2007) finds that 86% of higher educational institutions use SETs as important factors in personnel decisions. A 1991 survey of department chairs found 97% used student evaluations to assess teaching performance (US Department of Education). Since the mid-late 1990s, a general trend towards comprehensive methods of teacher evaluation that include multiple forms of assessment has been observed (Berk, 2005). However, recent research suggests the usage of SETs in personnel decisions is still overwhelmingly common, though hard percentages are hard to come by, perhaps owing to the multifaceted nature of these decisions (Boring et al., 2017; Galbraith et al., 2012). In certain contexts, student evaluations can also have ramifications beyond the level of individual instructors. Particularly as public schools have experienced pressure in recent decades to adopt neoliberal, market-based approaches to self-assessment and adopt a student-as-consumer mindset (Darwin, 2012; Marginson, 2009), information from evaluations can even feature in department- or school-wide funding decisions (see, for instance, the Obama Administration’s Race to the Top initiative, which awarded grants to K-12 institutions that adopted value-added models for teacher evaluation).

However, while SETs play a crucial role in faculty development and personnel decisions for many education institutions, current approaches to SET administration are not as well-suited to these purposes as they could be. This paper argues that a formative, empirical approach to teacher evaluation developed in response to the demands of the local context is better-suited for helping institutions improve their teachers. It proposes the Heavilon Evaluation of Teacher, or HET, a new teacher assessment instrument that can strengthen current approaches to faculty development by making them more responsive to teachers’ local contexts.

Commented [AWC13]: Here, we've made an indirect or secondary citation (i.e., we've cited a source that we found cited in a different source). Use the phrase "as cited in" in the parenthetical to indicate that the first-listed source was referenced in the second-listed one. Include an entry in the reference list only for the secondary source (Pounder, in this case).

Commented [AWC14]: Here, we've cited a source that has an institution as author rather than one named person. The corresponding reference list entry would begin with "US Department of Education."

Commented [AWC15]: Sources with three authors or more are cited via the first-listed author's name followed by the Latin phrase "et al." Note that the period comes after "al," rather than "et."


It also proposes a pilot study that will clarify the differences between this new instrument and the Introductory Composition at Purdue (ICaP) SET, a more traditional instrument used for similar purposes. The results of this study will direct future efforts to refine the proposed instrument. The Methods section, which follows, will propose a pilot study that compares the results of the proposed instrument to the results of a traditional SET (and will also provide necessary background information on both of these evaluations). The paper will conclude with a discussion of how the results of the pilot study will inform future iterations of the proposed instrument and, more broadly, how universities should argue for local development of assessments.

Literature Review

Effective Teaching: A Contextual Construct

The validity of the instrument this paper proposes is contingent on the idea that it is possible to systematically measure a teacher’s ability to teach. Indeed, the same could be said for virtually all teacher evaluations. Yet despite the exceeding commonness of SETs and the faculty development programs that depend on their input, there is little scholarly consensus on precisely what constitutes “good” or “effective” teaching. It would be impossible to review the entire history of the debate surrounding teaching effectiveness, owing to its sheer scope—such a summary might need to begin with, for instance, Cicero and Quintilian. However, a cursory overview of important recent developments (particularly those revealed in meta-analyses of empirical studies of teaching) can help situate the instrument this paper proposes in relevant academic conversations.

Commented [AF16]: Common paper sections (literature review, methods, results, discussion) typically use Level 1 headings, like this one does. Level 1 headings are centered, bolded, and use title case. Text begins after them as a new paragraph.

Commented [AF17]: This is a Level 2 heading: left aligned, bolded, title case. Text begins as a new paragraph after this kind of heading.


Meta-analysis 1

One core assumption that undergirds many of these conversations is the notion that good teaching has effects that can be observed in terms of student achievement. A meta-analysis of 167 empirical studies that investigated the effects of various teaching factors on student achievement (Kyriakides et al., 2013) supported the effectiveness of a set of teaching factors that the authors group together under the label of the “dynamic model” of teaching. Seven of the eight factors (Orientation, Structuring, Modeling, Questioning, Assessment, Time Management, and Classroom as Learning Environment) corresponded to moderate average effect sizes (of between 0.34–0.41 standard deviations) in measures of student achievement. The eighth factor, Application (defined as seatwork and small-group tasks oriented toward practice of course concepts), corresponded to only a small yet still significant effect size of 0.18. The lack of any single decisive factor in the meta-analysis supports the idea that effective teaching is likely a multivariate construct. However, the authors also note the context-dependent nature of effective teaching. Application, the least-important teaching factor overall, proved more important in studies examining young students (p. 148). Modeling, by contrast, was especially important for older students.

Meta-analysis 2

A different meta-analysis that argues for the importance of factors like clarity and setting challenging goals (Hattie, 2009) nevertheless also finds that the effect sizes of various teaching factors can be highly context-dependent. For example, effect sizes for homework range from 0.15 (a small effect) to 0.64 (a moderately large effect) based on the level of education examined. Similar ranges are observed for differences in academic subject (e.g., math vs. English) and student ability level. As Snook et al. (2009) note in their critical response to Hattie, while it is possible to produce a figure for the average effect size of a particular teaching factor, such averages obscure the importance of context.

Commented [AF18]: This is an example of a Level 3 heading: left aligned, bolded and italicized, and using title case. Text starts as a new paragraph after this. Most papers only use these three levels of headings; a fourth and fifth level are listed on the OWL in the event that you need them. Many student papers, however, don’t need more than a title and possibly Level 1 headings if they are short. If you’re not sure about how you should use headings in your paper, you can talk with your teacher about it and get advice for your specific case.

Commented [AWC19]: When presenting decimal fractions, put a zero in front of the decimal if the quantity is something that can exceed one (like the number of standard deviations here). Do not put a zero if the quantity cannot exceed one (e.g., if the number is a proportion).


Meta-analysis 3

A final meta-analysis (Seidel & Shavelson, 2007) found generally small average effect sizes for most teaching factors—organization and academic domain-specific learning activities showed the biggest cognitive effects (0.33 and 0.25, respectively). Here, again, however, effectiveness varied considerably due to contextual factors like domain of study and level of education in ways that average effect sizes do not indicate.

These pieces of evidence suggest that there are multiple teaching factors that produce measurable gains in student achievement and that the relative importance of individual factors can be highly dependent on contextual factors like student identity. This is in line with a well-documented phenomenon in educational research that complicates attempts to measure teaching effectiveness purely in terms of student achievement. This is that “the largest source of variation in student learning is attributable to differences in what students bring to school – their abilities and attitudes, and family and community” (McKenzie et al., 2005, p. 2). Student achievement varies greatly due to non-teacher factors like socio-economic status and home life (Snook et al., 2009). This means that, even to the extent that it is possible to observe the effectiveness of certain teaching behaviors in terms of student achievement, it is difficult to set generalizable benchmarks or standards for student achievement. Thus it is also difficult to make true apples-to-apples comparisons about teaching effectiveness between different educational contexts: due to vast differences between different kinds of students, a notion of what constitutes highly effective teaching in one context may not in another. This difficulty has featured in criticism of certain meta-analyses that have purported to make generalizable claims about what teaching factors produce the biggest effects (Hattie, 2009). A variety of other commentators have also made similar claims about the importance of contextual factors in teaching effectiveness for decades (see, e.g., Bloom et al., 1956; Cashin, 1990; Theall, 2017).

The studies described above mainly measure teaching effectiveness in terms of academic achievement. It should certainly be noted that these quantifiable measures are not generally regarded as the only outcomes of effective teaching worth pursuing. Qualitative outcomes like increased affinity for learning and greater sense of self-efficacy are also important learning goals. Here, also, local context plays a large role.

SETs: Imperfect Measures of Teaching

As noted in this paper’s introduction, SETs are commonly used to assess teaching performance and inform faculty development efforts. Typically, these take the form of an end-of-term summative evaluation comprised of multiple-choice questions (MCQs) that allow students to rate statements about their teachers on Likert scales. These are often accompanied with short-answer responses which may or may not be optional.

SETs serve important institutional purposes. While commentators have noted that there are crucial aspects of instruction that students are not equipped to judge (Benton & Young, 2018), SETs nevertheless give students a rare institutional voice. They represent an opportunity to offer anonymous feedback on their teaching experience and potentially address what they deem to be their teacher’s successes or failures. Students are also uniquely positioned to offer meaningful feedback on an instructor’s teaching because they typically have much more extensive firsthand experience of it than any other educational stakeholder. Even peer observers only witness a small fraction of the instructional sessions during a given semester. Students with perfect attendance, by contrast, witness all of them. Thus, in a certain sense, a student can theoretically assess a teacher’s ability more authoritatively than even peer mentors can.

Commented [AWC20]: To list a few sources as examples of a larger body of work, you can use the word "see" in the parenthetical, as we've done here.


While historical attempts to validate SETs have produced mixed results, some studies have demonstrated their promise. Howard (1985), for instance, finds that SETs are significantly more predictive of teaching effectiveness than self-report, peer, and trained-observer assessments. A review of several decades of literature on teaching evaluations (Watchel, 1998) found that a majority of researchers believe SETs to be generally valid and reliable, despite occasional misgivings. This review notes that even scholars who support SETs frequently argue that they alone cannot direct efforts to improve teaching and that multiple avenues of feedback are necessary (L’hommedieu et al., 1990; Seldin, 1993).

Finally, SETs also serve purposes secondary to the ostensible goal of improving instruction that nonetheless matter. They can be used to bolster faculty CVs and assign departmental awards, for instance. SETs can also provide valuable information unrelated to teaching. It would be hard to argue that it is not useful for a teacher to learn, for example, that a student finds the class unbearably boring, or that a student finds the teacher’s personality so unpleasant as to hinder her learning. In short, there is real value in understanding students’ affective experience of a particular class, even in cases when that value does not necessarily lend itself to firm conclusions about the teacher’s professional abilities.

However, a wealth of scholarly research has demonstrated that SETs are prone to fail in certain contexts. A common criticism is that SETs can frequently be confounded by factors external to the teaching construct. The best introduction to the research that serves as the basis for this claim is probably Neath (1996), who performs something of a meta-analysis by presenting these external confounds in the form of twenty sarcastic suggestions to teaching faculty. Among these are the instructions to “grade leniently,” “administer ratings before tests” (p. 1365), and “not teach required courses” (#11) (p. 1367). Most of Neath’s advice reflects an overriding observation that teaching evaluations tend to document students’ affective feelings toward a class, rather than their teachers’ abilities, even when the evaluations explicitly ask students to judge the latter.

Beyond Neath, much of the available research paints a similar picture. For example, a study of over 30,000 economics students concluded that “the poorer the student considered his teacher to be [on an SET], the more economics he understood” (Attiyeh & Lumsden, 1972). A 1998 meta-analysis argued that “there is no evidence that the use of teacher ratings improves learning in the long run” (Armstrong, 1998, p. 1223). A 2010 National Bureau of Economic Research study found that high SET scores for a course’s instructor correlated with “high contemporaneous course achievement,” but “low follow-on achievement” (in other words, the students would tend to do well in the course, but poorly in future courses in the same field of study). Others observing this effect have suggested SETs reward a pandering, “soft-ball” teaching style in the initial course (Carrell & West, 2010). More recent research suggests that course topic can have a significant effect on SET scores as well: teachers of “quantitative courses” (i.e., math-focused classes) tend to receive lower evaluations from students than their humanities peers (Uttl & Smibert, 2017).

Several modern SET studies have also demonstrated bias on the basis of gender (Anderson & Miller, 1997; Basow, 1995), physical appearance/sexiness (Ambady & Rosenthal, 1993), and other identity markers that do not affect teaching quality. Gender, in particular, has attracted significant attention. One recent study examined two online classes: one in which instructors identified themselves to students as male, and another in which they identified as female (regardless of the instructor’s actual gender) (Macnell et al., 2015).

Commented [AWC21]: This citation presents quotations from different locations in the original source. Each quotation is followed by the corresponding page number.


The classes were identical in structure and content, and the instructors’ true identities were concealed from students. The study found that students rated the male identity higher on average. However, a few studies have demonstrated the reverse of the gender bias mentioned above (that is, women received higher scores) (Bachen et al., 1999), while others have registered no gender bias one way or another (Centra & Gaubatz, 2000).

The goal of presenting these criticisms is not necessarily to diminish the institutional importance of SETs. Of course, insofar as institutions value the instruction of their students, it is important that those students have some say in the content and character of that instruction. Rather, the goal here is simply to demonstrate that using SETs for faculty development purposes—much less for personnel decisions—can present problems. It is also to make the case that, despite the abundance of literature on SETs, there is still plenty of room for scholarly attempts to make these instruments more useful.

Empirical Scales and Locally-Relevant Evaluation

One way to ensure that teaching assessments are more responsive to the demands of teachers’ local contexts is to develop those assessments locally, ideally via a process that involves the input of a variety of local stakeholders. Here, writing assessment liter
