In week 1, you selected a topic and developed a research question for that topic. Then, you developed a data-gathering instrument to measure the question either quantitatively or qualitatively. Now that you have had the opportunity to read how scholarly methodologies are written, you will write a condensed 3-4 page methodology section for your research question using the required headings from the University of the Cumberlands Dissertation Handbook. As we discussed in class, each university has unique parameters for what it expects in chapter 3, so you may see papers from other universities that look slightly different. The important thing here is to focus on the content, not necessarily the organization. This assignment will help determine your readiness to write a full-length chapter three.

Directions:
1. Review your notes from class on the different methodologies and instruments used to measure. Also, review the examples:
- Approaches Expectations.docx
- Meets Expectations.docx
- Exceeds Expectations.docx
- Finally, review the rubric: Rubric for Methodology.docx
2. Develop a 3-4 page (more is fine) methodology section that includes the following:
- Introduction
- Research Paradigm (qualitative or quantitative). Notes: Choose qualitative or quantitative based on what methodology you plan to use for your actual dissertation. You may not choose to do both qualitative and quantitative (mixed methods).
- Research or Project Design
- Sampling Procedures
- Data Collection Sources
- Statistical Tests Summary (quantitative) OR Data Organization Plan (qualitative). Notes: If you chose a quantitative research paradigm, you must complete the statistical tests summary option in this section. If you chose a qualitative research paradigm, you must complete the data organization plan option in this section.
3. Upload your methodology section to this assignment box.
Meets Expectations
Chapter Three: Methodology
Introduction
Servant leadership has been studied in multiple contexts since it was introduced as a leadership theory by Greenleaf (1970, 1977). This study sought to contextualize the servant leadership of a college president and the diffusion of that leadership style to other levels of the college administration by career administrators in higher education. The primary research question in this study was: To what extent does servant leadership by a college president diffuse to other leaders at the institution? Based on the literature (Liden et al., 2014; Newman et al., 2018), one would expect to see some conveyance of servant leadership to lower echelons of the institutional administration, given that servant leaders attempt to empower their followers as leaders (Jeyaraj & Gandolfi, 2019).
Research Paradigm
Following Liu’s (2019) methodology as a loose guide, this study employed a qualitative research paradigm. Qualitative research attempts to understand how individuals or a group perceive, experience, and engage with a phenomenon or problem (Creswell & Creswell, 2018). Specifically, this study took a phenomenological approach, which allowed the participants to expound upon their lived experiences of the phenomenon of servant leadership and its diffusion through the administration of an institution of higher education. Phenomenological research seeks to explore the lived experiences of humans and how they make sense of those experiences (Moustakas, 1994).
Research Design
In her study of servant leadership by one leader in a large information technology company, Liu (2019) designed a research project focused on the leader, with additional interviews of the members of his team. This study adopted a similar approach, with modifications reflecting the difference in focus between the two studies: Liu (2019) was concerned with the intersection of race, gender, and servant leadership, whereas this study focused on the conveyance of the principles of servant leadership throughout an organization. Phenomenological research considers the primary source of information to be the perceptions and disclosures of the participants (Moustakas, 1994). Two methods of data collection were used: semi-structured interviews and observations.
Over the course of a month, four approximately two-hour interviews were conducted with the president of Connelly College (the principal subject). The initial interview was semi-structured and sought to collect background information on the subject and his thoughts about his development as a servant leader. Subsequent interviews were casual and unstructured, focusing on thoughts from previous interviews, themes arising in other interviews (see below), and comments on current events relevant to leadership in higher education. Because of the informal nature of these interviews, it was impossible to present a formal interview protocol; however, an informal protocol and guiding thoughts appear in Appendix C.
In addition to the interviews with the college president, approximately 90-minute interviews were conducted with five members of the president’s cabinet, including the administrators responsible for finance, human resources, academics, ministry and mission, and student affairs. These interviews were semi-structured and focused on each administrator’s view of the president as a servant leader and on whether, and how, they in turn enact servant leadership in their own areas. An informal interview protocol for these interviews appears in Appendix D.
A third and final category of interviews was conducted with one administrator reporting to each of the five administrators identified above. These interviews were approximately 90 minutes in length and followed a semi-structured approach using the same informal interview protocol found in Appendix D. In addition to interviews, field observations were gathered from attendance at two president’s cabinet meetings.
Research Site and Data Collection Sources
The research and primary data collection site for this study was Connelly College; several interviews occurred at nearby restaurants or cafes as needed to accommodate participant schedules. Connelly College is a very small, coeducational, faith-based college classified as a Master’s II institution by the Carnegie Classification (need citation). Located in a suburban area of a Mid-Atlantic state, Connelly serves a diverse population with a significant share of first-generation college students. This site was selected because of its proximity to the researcher’s place of employment and his familiarity with both the institution and many of its administrators. Both Connelly College and the researcher’s institution are members of a consortium of institutions in the geographic region.
The primary subject in this study was Brian, who has served as the president of Connelly for approximately one year. Prior to arriving at Connelly, Brian was the president of a medium-sized, private, nonsectarian university in the Midwest. A career administrator in higher education, Brian received his doctorate in education only three years prior to becoming the president of Connelly. Brian has spoken and written about his orientation as a servant leader.
In addition to Brian, the following people were identified for this study based on his recommendations:
· Bruce – Executive-level administrator who holds a doctorate
· Betty – Mid-level administrator who holds a master’s degree and reports to Bruce
· Debbie – Senior administrator who holds a master’s degree
· Anna – Mid-level administrator who holds a master’s degree and reports to Debbie
· Madeline – Executive-level administrator who holds a doctorate
· Cathy – Senior administrator who holds a master’s degree and reports to Madeline
· Grace – Executive-level administrator who holds a master’s degree and is a member of the founding religious order of the college
· Evelyn – Mid-level administrator who holds a doctorate and reports to Grace
· Wade – Senior administrator who holds a master’s degree
· Bridget – Mid-level administrator who holds a master’s degree and reports to Wade
The names of all participants have been changed and limited information is provided here in order to maintain their privacy. All participants were provided with information regarding the process of the study and what was involved. Each participant completed an informed consent document before their interview began. A copy of the informed consent document is available in Appendix B.
Prior to the beginning of this study, all materials and plans were submitted for review by the Institutional Review Board (IRB) of the University of the Cumberlands and the IRB of Connelly College. Both IRBs approved this project, and copies of their approvals can be found in Appendix A.
Each interview was recorded with both a digital recorder and a cell phone-based recording application and transcribed using Amazon Transcribe. A transcription of each interview was sent to the respective participant, and each participant was offered an audio file of their interview.
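For context, submitting a recording to Amazon Transcribe is typically done through the AWS SDK; the sketch below is a minimal illustration using the Python SDK (boto3), with a hypothetical job name, S3 location, and media format, and is not necessarily the exact workflow used in this study.

```python
# Minimal sketch of submitting one interview recording to Amazon Transcribe.
# Requires AWS credentials; the job name, bucket, and key are hypothetical.
import boto3

transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="interview-president-01",               # hypothetical
    Media={"MediaFileUri": "s3://study-audio/interview01.mp3"},  # hypothetical
    MediaFormat="mp3",
    LanguageCode="en-US",
)
```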
Analysis
Phenomenological research generates a large amount of raw data which must be analyzed. However, before any analysis was conducted, the researcher reviewed his own notes and bracketed his own assumptions and judgements. This step is recommended by Moustakas (1994) in order to eliminate as much researcher bias as possible before data are analyzed. Though it is impossible to completely eliminate researcher bias in a qualitative and phenomenological study, this step is very helpful in reducing it. Moustakas (1994) identified four steps in analyzing phenomenological data: reduction, horizontalization, imaginative variation, and essence. The first step, reduction, is achieved by reducing the data into common themes and experiences as discussed by the participants. The second step, horizontalization, requires the researcher to obtain a “30,000-foot” view of the data in hopes of viewing it equally. The third step, imaginative variation, asks the researcher to view the data from counter and congruent perspectives. The fourth and final step, essence, draws the data and the other steps together to form an essence of the phenomenon and the experiences of the participants.
The researcher proceeded through the analysis process outlined above. He opted for structural coding with a set of a priori themes and references derived from the literature, including listening, imagination, acceptance, accountability, persuasion, and conceptualization (Heyler & Martin, 2018); and empowering, helping subordinates grow and succeed, and behaving ethically (Liden et al., 2014). Emergent themes not suggested by the literature were also noted and coded as part of the process. The coding process and analysis were conducted primarily by hand, but NVivo qualitative research software was used for data storage and assistance in identifying emergent themes.
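To make the structural coding step concrete, the sketch below shows one way a priori codes might be applied to transcript text programmatically. This is a minimal illustration under stated assumptions, not the study’s actual procedure (which was primarily manual, with NVivo for storage); the theme keyword lists and the transcripts directory are hypothetical.

```python
# Sketch of a priori structural coding: count keyword hits per theme in each
# transcript. Keyword lists and file layout are hypothetical illustrations.
import re
from collections import Counter
from pathlib import Path

A_PRIORI_THEMES = {
    "listening": ["listen", "hear"],
    "empowering": ["empower", "autonomy", "trust"],
    "helping_grow": ["mentor", "develop", "grow"],
    "behaving_ethically": ["ethic", "integrity", "honest"],
}

def code_transcript(text: str) -> Counter:
    """Count occurrences of each a priori theme's keywords in one transcript."""
    counts = Counter()
    lowered = text.lower()
    for theme, keywords in A_PRIORI_THEMES.items():
        counts[theme] = sum(len(re.findall(kw, lowered)) for kw in keywords)
    return counts

for path in Path("transcripts").glob("*.txt"):  # hypothetical directory
    print(path.name, dict(code_transcript(path.read_text(encoding="utf-8"))))
```

Segments matching no a priori theme would then be reviewed by hand for emergent codes, mirroring the manual step described above.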
Other Considerations
While all studies admit of some limitations, this study was potentially limited in the following ways. First, the sample consisted of individuals, including the president and three executive-level administrators, at a very small college. Though every assurance was given, and every reasonable step taken, to guarantee the confidentiality of the data, some lower-ranking administrators may have felt less inclined to share their full perspective. Second, Brian, the primary participant and president of Connelly College, has served in this role for only one year. Once Brian has been in his position longer, there may be further diffusion of servant leadership throughout the administration.
References
Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). Sage.
Greenleaf, R. K. (1970). The servant as leader. The Robert K. Greenleaf Center.
Greenleaf, R. K. (1977). Servant leadership: A journey into the nature of legitimate power and greatness. Paulist Press.
Heyler, S. G., & Martin, J. A. (2018). Servant leadership theory: Opportunities for additional theoretical integration. Journal of Managerial Issues, 30(2), 230-243.
Jeyaraj, J. J., & Gandolfi, F. (2019). Exploring trust, dialogue, and empowerment in servant leadership: Insights from critical pedagogy. Journal of Management Research, 19(4), 285-290.
Liden, R. C., Wayne, S. J., Liao, C., & Meuser, J. D. (2014). Servant leadership and serving culture: Influence on individual and unit performance. Academy of Management Journal, 57(5), 1434-1452.
Liu, H. (2019). Just the servant: An intersectional critique of servant leadership. Journal of Business Ethics, 156(4), 1099-1112.
Moustakas, C. (1994). Phenomenological research methods. Sage.
Newman, A., Neesham, C., Manville, G., & Tse, H. H. M. (2018). Examining the influence of servant and entrepreneurial leadership on the work outcomes of employees in social enterprises. The International Journal of Human Resource Management, 29(20), 2905-2926.
Chapter III: Methodology
As observed in recent years, large organizations face multiple data breaches from hackers trying to steal sensitive information. As technology advances, attackers with malicious intent are finding sophisticated methods to breach organizations, or succeed simply by exploiting known risks the organization could have avoided by updating its systems on a regular basis (Alawneh, 2008). The most recent breach that we were all aware of was the Equifax data breach, in which millions of customers' personally identifiable information (PII) was reportedly stolen, including Social Security numbers, names, financial records, and driver's license numbers. The purpose of this paper is to examine how organizations can protect themselves from data breaches, the ways in which their data can be exposed, and whether employee awareness will help organizations protect themselves from being attacked from various sources.
Design of the Study:
This study used a mixed approach; the collected data were both qualitative and quantitative in nature. The researcher developed a questionnaire containing both open-ended and close-ended questions, administered through LimeSurvey. Participants' responses were recorded as they explained their understanding of a breach and how data leaks can be classified into intentional threats and inadvertent threats; these recordings were transcribed for further analysis using Otter.ai. Each participant session lasted from 45 to 90 minutes depending on the tasks being performed.
Data Collection and Participants:
All participants involved in the sessions had worked or were currently working in financial organizations. Participants read and signed a consent form explaining the purpose of the research and were given the option to skip any question they felt uncomfortable answering. Participants were recruited through social media ads and from known contacts whose education and work experience related to the study being conducted. There were 50 participants in total, 30 males and 20 females, ranging in age from 24 to 57. When asked whether they knew how their organization protects itself from data breaches as part of employee awareness, 40 said yes and 10 said no. When asked about participating in any data security training conducted by the organization, 35 said yes and 15 said no. When asked about recording their answers on a device, 45 said yes and 5 said no; for those 5, answers were noted on paper with their consent.
Data Analysis and Sampling Procedures:
The participants' responses were addressed in two sections, quantitative and qualitative. When analyzing the qualitative data from the questionnaire, we conducted a thematic analysis in order to better analyze the participants' responses to the open-ended questions (Namey, 2012). The researcher was mostly focused on finding a pattern, a trend, or similar ideas that would help in outlining the analysis. The researcher developed codes to identify similar themes and patterns, then built an initial summary of these codes to identify the patterns in which participants talked about insider threats and data breaches and their choice of words, while simultaneously looking for alternatives where they had different views or insights on a data breach. The top-level codes were associated with our research questions, which were primarily focused on: (1) How many years have you been working in the financial sector? (2) What are some of the standard security measures that are in place? (3) How does your organization handle data breaches? (4) What are the consequences of a data breach? (5) How do you as an employee protect yourself from being a victim of a data breach? Several subcategories were created based on the responses from the participants, but note that the above are all high-level categories, which are clustered. The questionnaire responses were also analyzed by aggregating each participant's scores, subtracting the negatively keyed item scores from the positively keyed ones. Each response was scored from 1 to 5, where 1 is always, 3 is sometimes, and 5 is never.
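As an illustration of the scoring rule just described, the following minimal Python sketch aggregates one respondent's close-ended answers by subtracting the negatively keyed item scores from the positively keyed ones; the item keys and example responses are hypothetical, not items from the actual instrument.

```python
# Sketch of the aggregate scoring described above: positively keyed item
# scores minus negatively keyed ones, each on the 1 (always) to 5 (never)
# scale. Item keys and responses are hypothetical.
POSITIVE_ITEMS = ["q1", "q3", "q5"]  # e.g., "I complete security training"
NEGATIVE_ITEMS = ["q2", "q4"]        # e.g., "I reuse passwords across systems"

def aggregate_score(responses: dict[str, int]) -> int:
    """Sum of positive-item scores minus sum of negative-item scores."""
    positive = sum(responses[item] for item in POSITIVE_ITEMS)
    negative = sum(responses[item] for item in NEGATIVE_ITEMS)
    return positive - negative

example = {"q1": 1, "q2": 5, "q3": 2, "q4": 4, "q5": 3}
print(aggregate_score(example))  # (1 + 2 + 3) - (5 + 4) = -3
```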
References:
Alawneh, M., & Abbadi, I. M. (2008). Preventing information leakage between collaborating organisations. In Proceedings of the 10th International Conference on Electronic Commerce (ICEC).
Identity Theft Resource Center. (2019). Data breaches. Retrieved April 1, 2019, from https://www.idtheftcenter.org/data-breaches/
Julisch, K., & Dacier, M. (2002). Mining intrusion detection alarms for actionable knowledge. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Namey, E., Guest, G., & MacQueen, K. M. (2012). Introduction to applied thematic analysis. In Applied thematic analysis. SAGE Publications.
Chapter Three: Procedures and Methodology
Introduction
The goal of education is to increase student achievement and knowledge of the material being taught. If this is the real goal of education, it is imperative that educators search for best practices that assist in increasing student achievement. While many different aspects impact student achievement, expanding the practice efforts of educators to help in the classroom is beneficial (Tucker & Strange, 2020). The idea for the study centered on Benjamin Bloom's theory of mastery learning, which, while it may be considered old by many in education today, emphasizes the utilization of formative assessments and individualized learning to drive instruction (Guskey, 2010). The purpose of this quantitative study was to determine the strength and nature of the relationship between the level of implementation of the diagnostic assessment software PowerSchool and student achievement on the eighth-grade mathematics TCAP test in a semi-rural system in northeast Tennessee.
The target system for this study served a total enrollment of 5,290 students in grades pre-K through 12 and consists of 15 schools and one alternative placement setting. While many of the schools in the target district are considered to perform at proficient levels for student achievement, others are, or are in danger of becoming, target schools as designated by the Tennessee Department of Education based on student achievement. As with all public schools in the state of Tennessee, all third through eighth-grade students in the target district partake in yearly TCAP testing in ELA, mathematics, science, and social studies. As discussed in Chapter 2, TCAP is a criterion-referenced assessment that, when coupled with TVAAS and value-added data, is a reliable and valuable source of data for educators statewide. Chapter 3 discusses the methodology of the research as well as the utilization of the TCAP and TVAAS as sources of data. The chapter begins with an introduction and the research paradigm and design before moving through the sampling procedures, data collection sources, statistical tests being utilized, and a summary of the chapter.
Research Paradigm
The review of the literature discussed in Chapter Two explained how the use of formative assessments and mastery learning could increase student achievement. Furthermore, as mentioned in Chapter One, the state of Tennessee, as well as the nation, is facing a crisis, with a large percentage of today's students performing below grade-level expectations. For this reason, systems nationwide have implemented programs specifically aimed at increasing student achievement in mathematics, such as Response to Intervention (RTI), and continually search for programs that can further this goal of improving student achievement and understanding in mathematics.
The goal of this study was to investigate the relationship between the level of implementation of the diagnostic assessment software PowerSchool and student achievement in eighth-grade mathematics in a semi-rural northeast Tennessee school system. A quantitative study was chosen since quantitative research “entails the collection of numerical data and exhibiting the view of the relationship between theory and research as deductive, a predilection for natural science approach, and as having an objectivist conception of social reality” (Bryman & Bell, 2015, p. 160). The dependent variables for this quantitative study were different measures of student achievement based on the eighth-grade Tennessee Comprehensive Assessment Program (TCAP) administered in the spring semester of the 2020-2021 school year. The independent variable was the level of implementation of the diagnostic assessment software PowerSchool during the 2020-2021 school year in a school district in semi-rural northeast Tennessee.
Research Design
The research design for this study was a correlational design utilizing the independent samples t-test and chi-square to measure the strength and nature of the relationship between student achievement and the level of implementation of PowerSchool. A correlational design was chosen due to the desire to determine whether, and how strong, a relationship exists between the level of implementation and student achievement. One group of classes used PowerSchool merely for benchmark testing, making up less than five percent of instructional time spent utilizing PowerSchool. In contrast, the second group used PowerSchool not only for the system-wide benchmark testing but also weekly as formative assessments to drive daily instruction, making up 50% or more of instructional time spent utilizing PowerSchool. The research used the PowerSchool program, including criterion-referenced benchmark exams based on Tennessee state standards provided through the program, as well as a second criterion-referenced test, the Tennessee Comprehensive Assessment Program (TCAP), during the 2020-2021 school year. PowerSchool was implemented through an online platform inside the individual classrooms, with access also provided at home. The TCAP test was completed in paper-and-pencil format during Spring 2021 by exiting eighth-grade students in northeastern Tennessee.
The researcher chose three different measures to serve as dependent variables, one for each research question, with the same independent variable for each. The dependent variables were based on different measures of student achievement on the 8th grade TCAP mathematics test: individual composite scores, TVAAS value-added for each teacher participating in the study based on TCAP scores, and the level of achievement of each student. The independent variable was the level of implementation of the diagnostic assessment software PowerSchool: full implementation used to drive the curriculum, or implementation for benchmark testing purposes only.
Sampling Procedures
Prior to conducting this study, approval was requested and obtained from the University of the Cumberlands Institutional Review Board (IRB). The target school system chosen for this study has acknowledged that a problem exists with student achievement in TCAP testing, especially in middle school mathematics. For this reason, the system implemented the mandatory use of benchmark testing (three tests throughout the school year) utilizing the PowerSchool software system-wide during the 2019-2020 school year. The curriculum supervisor of the district in question granted permission to conduct the research (see Appendix A).
The targeted semi-rural district located in northeast Tennessee was relatively large for a single district. According to data obtained from the personnel department of the target school district, during the 2020-2021 school year, the district employed 473 professional employees: 10 supervisors, 16 principals, eight assistant principals, 18 system-wide support supervisors (curriculum coaches, testing coordinators, etc.), and 421 classroom teachers. Furthermore, the targeted district consists of 15 schools serving students in grades pre-kindergarten through twelfth grade and one alternative placement school. The 15 schools served 5,290 total students, 756 of whom qualified for special education services. The system is considered “direct serve,” which indicates that all students in kindergarten through eighth grade receive free breakfast and lunch, and each school in the system qualifies as a Title I school. The ethnic composition of the 5,290 total students served during the 2020-2021 school year was 95.7% Caucasian, 2.28% Hispanic, and 2.02% identifying as other.
Due to the nature of the study, a non-random, convenience sampling method was chosen for participants. Convenience samples are defined as the “non-probability sampling method that relies on data collection from population members who are conveniently available to participate in the study” (Convenience, 2019). Because convenience sampling was utilized, the study lacks the desired trait of randomness in sampling. However, the purpose of this study was to identify whether a relationship exists between the level of implementation of the diagnostic assessment software PowerSchool and student achievement in a local northeast Tennessee school system; thus, the research and results may not produce data that can be generalized to an overall population. Furthermore, including all eighth-grade students in the targeted district helps to strengthen the validity of the study.
The targeted system consists of seven middle schools, three of which implemented PowerSchool fully (50% or more of instructional time), classified as Group X, and five of which utilized the program for benchmark tests only (less than 5% of instructional time), classified as Group Y. Using all of the district's middle schools made the convenience sample appropriate. Of the 5,290 total students served by the district, 404 were served in eighth grade, representing approximately 7.6% of the population. For this study, the eighth graders were separated into two groups: Group X consisted of 188 individual students (n = 188) and four teachers, and Group Y consisted of 216 individual students (n = 216) and four teachers.
Data Collection Sources
This study based its data collection primarily on the results of the eighth-grade Tennessee Comprehensive Assessment Program (TCAP) as well as the value-added results formulated from TVAAS. As previously discussed, the TCAP assessment is assumed to be a valid and reliable criterion-referenced measure. The TCAP assessments were completed during April 2021, and results were finalized and reported back to the system during the summer of 2021. TCAP testing is implemented for all students in grades three through eight throughout the state of Tennessee. Once the results were reported back to the system, the system contacted the researcher and provided access to the students' results in coded form for each individual who was part of the study.
Figure 1. Individual student TCAP report.
The data received from the TCAP results, as well as the TVAAS value-added reporting for each student and teacher involved in the study, were then collected, coded, and organized.
The data were collected from the testing department of the targeted school system. In order to prevent bias, the data were organized into two groups based on the level of implementation in the study: Group X (full implementation) and Group Y (benchmark utilization only). The testing department provided the data in each group without the individual names of teachers, students, or schools, or any other personal data that could be used as identifying markers. The students' data were numbered using three-digit codes beginning with 001 for the analysis of the composite scores as well as the level of achievement and value-added data. Tables 1, 2, and 3 represent the manner in which the researcher organized the data.
Table 1. Group X student TCAP data for the 2020-2021 school year.

Student | Composite Score | Equivalent Level of Achievement | Amount of Value-added
001     |                 |                                 |
002     |                 |                                 |
003     |                 |                                 |
…       |                 |                                 |
Table 2. Group Y student TCAP data for the 2020-2021 school year.

Student | Composite Score | Equivalent Level of Achievement | Amount of Value-added
001     |                 |                                 |
002     |                 |                                 |
003     |                 |                                 |
…       |                 |                                 |
Table 3. The number of students in each level of achievement for both Group X and Group Y on the TCAP test for the 2020-2021 school year.

        | Level 1: Below | Level 2: Approaching | Level 3: On Track | Level 4: Mastery
Group X |                |                      |                   |
Group Y |                |                      |                   |
This method of data collection was chosen in hopes of maintaining confidentiality as well as preventing any biased results in the study.
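As an illustration of the coding scheme described above, the sketch below shows one way the sequential three-digit numbering of de-identified records might be carried out in Python; the field names and sample values are hypothetical, since the testing department supplied the data already stripped of identifiers.

```python
# Sketch of assigning three-digit student codes (001, 002, ...) within each
# implementation group. Field names and sample values are hypothetical.
def assign_codes(records: list[dict], group_label: str) -> list[dict]:
    """Attach a group label and a sequential three-digit code to each record."""
    return [
        {
            "group": group_label,
            "student": f"{i:03d}",
            "composite": rec["composite_score"],
            "achievement_level": rec["achievement_level"],
            "value_added": rec["value_added"],
        }
        for i, rec in enumerate(records, start=1)
    ]

raw_group_x = [  # hypothetical de-identified records from the testing dept.
    {"composite_score": 341, "achievement_level": "On Track", "value_added": 1.2},
    {"composite_score": 318, "achievement_level": "Approaching", "value_added": -0.4},
]
for row in assign_codes(raw_group_x, "X"):
    print(row)
```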
Statistical Tests
The researcher utilized descriptive and inferential statistics to analyze the data for this quantitative study to determine whether a significant difference exists. The researcher performed independent samples t-tests to analyze the individual student composite scores provided by performance on the TCAP test as well as the amount of value added indicated by the TVAAS value-added report. According to SPSS (2020), the independent samples t-test is utilized to compare “the means of two independent groups to determine whether there is statistical evidence that the associated population means are significantly different.” The researcher chose to perform a chi-square test to determine whether a significant difference exists in the number of students at each level of achievement on the 8th Grade TCAP test (Below, Approaching, On Track, Mastered) between classes with different levels of PowerSchool implementation (full implementation as opposed to benchmark usage only). For all three tests, the data were analyzed at a significance level of α = .05 to determine whether a significant difference exists. Table 4 represents the data collection and statistical test matrix the researcher utilized for this study.
Table 4. Data collection and statistical test matrix.

Research Question | Data Collection Sources | Statistical Test
Is there a significant difference in the Tennessee Comprehensive Assessment Program (TCAP) 8th Grade composite math scores between classes with different levels of PowerSchool implementation (full implementation as opposed to benchmark usage only)? | TCAP composite math scores | Independent samples t-test
Is there a significant difference in the amount of TVAAS value added between classes with different levels of PowerSchool implementation? | TVAAS value-added reports | Independent samples t-test
Is there a significant difference in the number of students at each level of achievement (Below, Approaching, On Track, Mastered) between classes with different levels of PowerSchool implementation? | TCAP levels of achievement | Chi-square
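For illustration, the sketch below shows how the planned independent samples t-tests and chi-square test might be run in Python with SciPy; the score vectors and contingency counts are hypothetical stand-ins for the actual TCAP data, with significance judged at α = .05.

```python
# Sketch of the planned tests using hypothetical data in place of actual
# TCAP results. Significance is judged at alpha = 0.05.
from scipy import stats

# Independent samples t-test on composite scores, Group X vs. Group Y.
group_x_scores = [341, 318, 356, 330, 349]  # hypothetical composite scores
group_y_scores = [322, 305, 331, 298, 340]  # hypothetical composite scores
t_stat, p_value = stats.ttest_ind(group_x_scores, group_y_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, significant: {p_value < 0.05}")

# Chi-square test of independence on achievement-level counts
# (Below, Approaching, On Track, Mastered) for Groups X and Y.
observed = [
    [30, 60, 70, 28],  # Group X, hypothetical counts (sum to n = 188)
    [45, 80, 65, 26],  # Group Y, hypothetical counts (sum to n = 216)
]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, df = {dof}, significant: {p < 0.05}")
```

The same t-test call would be repeated for the TVAAS value-added figures.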